
    Superadditivity of Quantum Channel Coding Rate with Finite Blocklength Joint Measurements

    The maximum rate at which classical information can be reliably transmitted per use of a quantum channel strictly increases in general with $N$, the number of channel outputs that are detected jointly by the quantum joint-detection receiver (JDR). This phenomenon is known as superadditivity of the maximum achievable information rate over a quantum channel. We study this phenomenon for a pure-state classical-quantum (cq) channel and provide a lower bound on $C_N/N$, the maximum information rate when the JDR is restricted to making joint measurements over no more than $N$ quantum channel outputs, while allowing arbitrary classical error correction. We also show the appearance of a superadditivity phenomenon, of mathematical resemblance to the problem above, in the channel capacity of a classical discrete memoryless channel (DMC) when a concatenated coding scheme is employed and the inner decoder is forced to make hard decisions on $N$-length inner codewords. Using this correspondence, we develop a unifying framework for the above two notions of superadditivity, and show that for our lower bound on $C_N/N$ to equal a given fraction of the asymptotic capacity $C$ of the respective channel, $N$ must be proportional to $V/C^2$, where $V$ is the respective channel dispersion quantity.
    Comment: To appear in IEEE Transactions on Information Theory.
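    To make the $V/C^2$ scaling concrete on the classical side, here is a minimal sketch (an illustration, not code from the paper) that evaluates the closed-form capacity and dispersion of a binary symmetric channel, a standard DMC, and prints the resulting $V/C^2$ blocklength scale; the crossover probabilities are arbitrary choices.

```python
import numpy as np

def bsc_capacity(p):
    """Capacity of a BSC(p) in bits per use: C = 1 - h2(p)."""
    h2 = -p * np.log2(p) - (1 - p) * np.log2(1 - p)
    return 1.0 - h2

def bsc_dispersion(p):
    """Channel dispersion of a BSC(p) in bits^2 per use:
    V = p(1-p) * log2((1-p)/p)^2."""
    return p * (1 - p) * np.log2((1 - p) / p) ** 2

# The paper's result says N must grow like V / C^2 to reach a fixed
# fraction of capacity; print that scale for a few noise levels.
for p in [0.01, 0.05, 0.11]:
    C, V = bsc_capacity(p), bsc_dispersion(p)
    print(f"p={p:.2f}  C={C:.3f}  V={V:.3f}  V/C^2={V / C**2:.1f}")
```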

    On capacity of optical communications over a lossy bosonic channel with a receiver employing the most general coherent electro-optic feedback control

    We study the problem of designing optical receivers to discriminate between multiple coherent states using a coherent-processing receiver, i.e., one that uses arbitrary coherent feedback control and quantum-noise-limited direct detection; Dolinar showed that such a receiver achieves the minimum error probability in discriminating any two coherent states. We first derive and re-interpret Dolinar's binary-hypothesis minimum-probability-of-error receiver as the one that optimizes the information efficiency at each time instant, based on recursive Bayesian updates within the receiver. Using this viewpoint, we propose a natural generalization of Dolinar's receiver design to discriminate $M$ coherent states, each of which could now be a codeword, i.e., a sequence of $N$ coherent states drawn from a modulation alphabet. We analyze the channel capacity of the pure-loss optical channel with a general coherent-processing receiver in the low-photon-number regime and compare it with the capacity achievable with direct detection and with the Holevo limit (achieving the latter would require a quantum joint-detection receiver). We present compelling evidence that, despite the optimal performance of Dolinar's receiver for the binary coherent-state hypothesis test (in both error probability and mutual information), the asymptotic communication rate achievable by such a coherent-processing receiver is only as good as direct detection. This suggests that in the infinitely-long-codeword limit, all potential benefits of coherent processing at the receiver can be obtained by designing a good code and using direct detection, with no feedback within the receiver.
    Comment: 17 pages, 5 figures.
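    For intuition on the binary benchmark, the sketch below (illustrative only, and not tied to this paper's codeword setting) evaluates two standard closed forms for discriminating the BPSK coherent states $|\alpha\rangle$ and $|-\alpha\rangle$ with equal priors: the Helstrom bound, which Dolinar's receiver attains, and the error probability of a simple displace-and-photon-count (Kennedy) receiver.

```python
import numpy as np

def helstrom_bpsk(alpha):
    """Minimum error probability for |alpha> vs |-alpha> with equal priors,
    attained by Dolinar's receiver: (1 - sqrt(1 - e^{-4|alpha|^2})) / 2."""
    return 0.5 * (1.0 - np.sqrt(1.0 - np.exp(-4.0 * abs(alpha) ** 2)))

def kennedy_bpsk(alpha):
    """Kennedy receiver: displace |-alpha> to vacuum and photon-count;
    an error occurs only when |2 alpha> yields zero clicks."""
    return 0.5 * np.exp(-4.0 * abs(alpha) ** 2)

for n_bar in [0.05, 0.2, 1.0]:  # mean photon number |alpha|^2 per symbol
    a = np.sqrt(n_bar)
    print(f"|alpha|^2={n_bar:.2f}  Helstrom={helstrom_bpsk(a):.3e}  "
          f"Kennedy={kennedy_bpsk(a):.3e}")
```

    The gap between the two curves at low photon number is exactly the kind of advantage that coherent feedback buys for a single binary decision, which the paper argues does not translate into a capacity advantage.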

    Fundamental Limits on Data Acquisition: Trade-offs between Sample Complexity and Query Difficulty

    We consider query-based data acquisition and the corresponding information recovery problem, where the goal is to recover $k$ binary variables (information bits) from parity measurements of those variables. The queries and the corresponding parity measurements are designed using the encoding rule of Fountain codes. By using Fountain codes, we can design a potentially limitless number of queries and corresponding parity measurements, and guarantee that the original $k$ information bits can be recovered with high probability from any sufficiently large set of measurements of size $n$. In the query design, the average number of information bits associated with one parity measurement is called the query difficulty ($\bar{d}$), and the minimum number of measurements required to recover the $k$ information bits for a fixed $\bar{d}$ is called the sample complexity ($n$). We analyze the fundamental trade-off between query difficulty and sample complexity, and show that a sample complexity of $n = c\max\{k, (k\log k)/\bar{d}\}$ for some constant $c > 0$ is necessary and sufficient to recover the $k$ information bits with high probability as $k \to \infty$.
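    A minimal simulation of the query model is sketched below. It assumes a simplified ensemble in which each information bit joins a query independently with probability $\bar{d}/k$ (real Fountain codes draw query degrees from a designed distribution such as the robust soliton), and counts how many parity measurements are needed before the $k$ bits are uniquely determined over GF(2); all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    rank, rows, cols = 0, A.shape[0], A.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # move pivot row into place
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

def sample_complexity(k, d_bar, max_n=20000):
    """Draw random parity queries of expected degree d_bar until the
    k information bits are uniquely recoverable (rank k over GF(2))."""
    queries = []
    while len(queries) < max_n:
        row = (rng.random(k) < d_bar / k).astype(np.uint8)
        queries.append(row)
        if len(queries) >= k and gf2_rank(np.array(queries)) == k:
            return len(queries)
    return None

k = 64
for d_bar in [2, 4, 8]:
    print(f"d_bar={d_bar}: n={sample_complexity(k, d_bar)} queries for k={k} bits")
```

    Lower query difficulty forces noticeably more measurements, in line with the $(k\log k)/\bar{d}$ term dominating the trade-off for small $\bar{d}$.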

    Graph Matching in Correlated Stochastic Block Models for Improved Graph Clustering

    We consider community detection from multiple correlated graphs sharing the same community structure. The correlated graphs are generated by independent subsampling of a parent graph sampled from the stochastic block model. The vertex correspondence between the correlated graphs is assumed to be unknown. We consider a two-step procedure in which the vertex correspondence between the correlated graphs is first revealed, and the communities are then recovered from the union of the correlated graphs, which is denser than any single graph. We derive the information-theoretic limits for exact graph matching for general density regimes and numbers of communities, and then characterize the regime of graph parameters in which matching the correlated graphs helps in recovering the latent community structure of the graphs.
    Comment: Allerton Conference 202
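    The density gain from the union step is easy to see in simulation. The sketch below (a toy with two equal communities and illustrative parameters, assuming the true vertex correspondence is already known so the union can be formed directly) subsamples a parent SBM twice and compares edge densities; each parent edge survives in the union with probability $1-(1-s)^2$ rather than $s$.

```python
import numpy as np

rng = np.random.default_rng(1)

def sbm_adjacency(n, p_in, p_out):
    """Symmetric two-community SBM: intra-community edge prob p_in, inter p_out."""
    labels = np.repeat([0, 1], n // 2)
    probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    return (upper | upper.T).astype(np.uint8)

def subsample(adj, s):
    """Keep each undirected edge of the parent independently with probability s."""
    keep = np.triu(rng.random(adj.shape) < s, k=1)
    return adj * (keep | keep.T)

n, s = 400, 0.5
parent = sbm_adjacency(n, p_in=0.10, p_out=0.02)
child1, child2 = subsample(parent, s), subsample(parent, s)
union = ((child1 + child2) > 0).astype(np.uint8)   # true correspondence assumed

for name, g in [("child 1", child1), ("child 2", child2), ("union", union)]:
    print(f"{name}: edge density = {g.sum() / (n * (n - 1)):.4f}")
```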

    Unequal Error Protection Querying Policies for the Noisy 20 Questions Problem

    In this paper, we propose an open-loop unequal-error-protection querying policy based on superposition coding for the noisy 20 questions problem. In this problem, a player wishes to successively refine an estimate of the value of a continuous random variable by posing binary queries and receiving noisy responses. When the queries are designed non-adaptively as a single block and the noisy responses are modeled as the output of a binary symmetric channel, the 20 questions problem can be mapped to an equivalent problem of channel coding with unequal error protection (UEP). A new non-adaptive querying strategy based on UEP superposition coding is introduced whose estimation error decreases with an exponential rate of convergence that is significantly better than that of the UEP repetition coding introduced by Variani et al. (2015). With the proposed querying strategy, the rate of exponential decrease in the number of queries matches that of a closed-loop adaptive scheme in which queries are designed sequentially with the benefit of feedback. Furthermore, the achievable error exponent is significantly better than that of random block codes employing equal error protection.
    Comment: To appear in IEEE Transactions on Information Theory.
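    To make the UEP idea concrete, the sketch below implements a toy version of the repetition-coded baseline that the paper improves on (not the proposed superposition scheme): the target $X \in [0,1)$ is queried bit by bit through a BSC, and more-significant bits receive more repeated queries under a fixed budget. The bit count, budget, and repetition split are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def query_bits(x, B):
    """First B bits of the dyadic expansion of x in [0, 1)."""
    return [int(x * 2 ** (b + 1)) % 2 for b in range(B)]

def estimate(x, reps, eps):
    """Query each bit through a BSC(eps), repeating bit b reps[b] times,
    decode by majority vote, and return the reconstructed point."""
    x_hat = 0.0
    for b, (bit, r) in enumerate(zip(query_bits(x, len(reps)), reps)):
        noisy = np.where(rng.random(r) < eps, 1 - bit, bit)
        x_hat += (noisy.mean() >= 0.5) * 2.0 ** -(b + 1)
    return x_hat

def mse(reps, eps, trials=2000):
    xs = rng.random(trials)
    return np.mean([(x - estimate(x, reps, eps)) ** 2 for x in xs])

B, eps = 8, 0.1                                  # 8 bits, 64-query budget
eep = [8] * B                                    # equal error protection
uep = [15, 12, 10, 8, 7, 5, 4, 3]                # more queries for high-order bits
print("EEP MSE:", mse(eep, eps), " UEP MSE:", mse(uep, eps))
```

    Shifting protection toward the most significant bits markedly lowers the mean squared error under the same query budget; the paper's superposition-coded policy sharpens exactly this exponent.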

    Efficient Algorithms for Exact Graph Matching on Correlated Stochastic Block Models with Constant Correlation

    We consider the problem of graph matching, or learning the vertex correspondence, between two correlated stochastic block models (SBMs). The graph matching problem arises in various fields, including computer vision, natural language processing, and bioinformatics; in particular, matching graphs with inherent community structure is relevant to the de-anonymization of correlated social networks. For the correlated Erdős–Rényi (ER) model, various efficient algorithms have been developed, a few of which provably achieve exact matching at constant edge correlation. For correlated SBMs with constant correlation, however, no low-order polynomial-time algorithm was previously known to achieve exact matching. In this work, we propose an efficient algorithm for matching graphs with community structure, based on comparing partition trees rooted at each vertex, extending the idea of Mao et al. (2021) to graphs with communities. The partition tree divides the large neighborhood of each vertex into disjoint subsets using edge statistics to different communities. Our algorithm is the first low-order polynomial-time algorithm to achieve exact matching between two correlated SBMs with high probability in dense graphs.
    Comment: ICML 202
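    The sketch below gives a much-simplified flavor of signature-based matching; it is not the paper's partition-tree algorithm. Each vertex is summarized by its edge counts into each community, and the vertices of two subsampled copies are matched by minimizing signature distance with SciPy's Hungarian solver. It assumes community labels are known, and the parameters are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)

# Toy setup: parent SBM with two known communities (the actual algorithm
# needs no such side information and uses partition-tree statistics).
n, p_in, p_out, s = 300, 0.5, 0.1, 0.9
labels = np.repeat([0, 1], n // 2)
probs = np.where(labels[:, None] == labels[None, :], p_in, p_out)
up = np.triu(rng.random((n, n)) < probs, k=1)
parent = (up | up.T).astype(np.uint8)

def subsample(adj):
    keep = np.triu(rng.random(adj.shape) < s, k=1)
    return adj * (keep | keep.T)

g1, g2 = subsample(parent), subsample(parent)   # true matching = identity

def signatures(g):
    """Per-vertex edge counts into each community: a crude stand-in for
    the partition-tree statistics used by the actual algorithm."""
    return np.stack([g[:, labels == 0].sum(1), g[:, labels == 1].sum(1)], axis=1)

# Match vertices of g1 to g2 by minimizing L1 signature distance.
cost = np.abs(signatures(g1)[:, None, :].astype(int)
              - signatures(g2)[None, :, :].astype(int)).sum(-1)
row, col = linear_sum_assignment(cost)
print("fraction correctly matched:", np.mean(col == row))
```

    These two-dimensional signatures recover only a fraction of the correspondence, typically well above the 1/n chance level yet far from exact matching, which is precisely why the paper's algorithm builds much richer multi-level partition trees over large neighborhoods.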